Read more of this story at Slashdot.
A customer wanted to know the maximum number of values that can be stored in a single registry key. They found that they ran into problems when they reached a certain number of values, which was well over a quarter million.
Okay, wait a second. Why are you adding over a quarter million values to a registry key!?
The customer explained that they mark every file in their installer as msidbComponentAttributesSharedDllRefCount, to avoid the problem described in the documentation. And when I said every file, I really meant every file. Not just DLLs, but also text files, GIFs, XML files, everything. Just the names of the registry values add up to over 30 megabytes.
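As a rough sanity check (the average name length here is my assumption, not a figure from the customer), a quarter million value names really do reach tens of megabytes, since each name is typically a full file path:

```python
# Hypothetical back-of-the-envelope estimate. Registry value names are
# stored as UTF-16, so each character costs 2 bytes.
values = 250_000       # "well over a quarter million" values
avg_name_chars = 60    # assumed average name length (a full file path)
bytes_per_char = 2     # UTF-16 encoding

total_bytes = values * avg_name_chars * bytes_per_char
print(f"{total_bytes / 1_000_000:.0f} MB")  # prints "30 MB"
```

With those assumed numbers, the names alone account for the 30 megabytes the customer observed, before counting any of the value data.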
Since their product supports multiple versions installed side-by-side, installing multiple versions of their product accumulates values in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SharedDLLs registry key.
The customer saw the story about problems if you forget to mark a shared file as msidbComponentAttributesSharedDllRefCount, and decided that they are going to fix it by saying that every single file should go into SharedDLLs. But that’s the wrong lesson.
The lesson is “If a file is shared, then mark it as shared.” And “shared” means “multiple products use the same DLL installed into the same directory” (such as the system32 directory or the C:\Program Files\Common Files\Contoso\ directory). Since the customer says that their programs install side-by-side, there are unlikely to be any shared files at all! They probably can just remove the msidbComponentAttributesSharedDllRefCount attribute from all of their files.
The SharedDLLs registry key was created in Windows 95 as one of many attempts to address the problem of DLL management when multiple products all want to install the same DLL (for example, the C runtime library). Any DLL that was shared would be registered in the SharedDLLs registry key with a “usage count”. An installer would increment the count, and an uninstaller would decrement it.
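The increment/decrement protocol can be sketched as follows. This is a simulation that uses a plain dict in place of the real HKEY_LOCAL_MACHINE\...\SharedDLLs key, and the helper names are mine, not part of any Windows API:

```python
# Simulated SharedDLLs registry key: maps a full file path to a usage count.
shared_dlls: dict[str, int] = {}

def installer_register(path: str) -> None:
    """On install, increment the shared file's usage count (creating it at 1)."""
    shared_dlls[path] = shared_dlls.get(path, 0) + 1

def uninstaller_release(path: str) -> bool:
    """On uninstall, decrement the count; return True if the file may be deleted."""
    count = shared_dlls.get(path, 0) - 1
    if count <= 0:
        shared_dlls.pop(path, None)
        return True   # no remaining product uses the file
    shared_dlls[path] = count
    return False      # other products still reference it

# Two products install the same shared DLL:
installer_register(r"C:\Windows\system32\msvcrt.dll")
installer_register(r"C:\Windows\system32\msvcrt.dll")
print(uninstaller_release(r"C:\Windows\system32\msvcrt.dll"))  # False: still in use
print(uninstaller_release(r"C:\Windows\system32\msvcrt.dll"))  # True: safe to delete
```

The scheme works only if every installer and uninstaller plays along, which is exactly why a product that marks every file as shared pollutes the key for everyone.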
Now, this addressed only the “keeping track of when it is safe to delete a DLL at uninstall” problem. It doesn’t do anything to solve the “multiple versions of the same DLL” problem. For that, the assumption was that (1) installers would compare the version number of the DLL already on the system with the version they want to install, and replace the existing file only if the new file has a higher version number; and, for that policy to be safe, (2) all future versions of a DLL are backward compatible with any earlier versions.
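Rule (1), replacing only when the incoming file is newer, amounts to a lexicographic comparison on the four-part file version. Here is a sketch; a real installer would read the version out of the file’s version resource rather than take it as a parameter:

```python
def should_replace(installed: tuple[int, ...], incoming: tuple[int, ...]) -> bool:
    """Overwrite the existing shared DLL only if the incoming version is newer."""
    # Tuples compare lexicographically: (major, minor, build, revision).
    return incoming > installed

print(should_replace((6, 1, 7600, 0), (6, 2, 9200, 0)))  # True: incoming is newer
print(should_replace((6, 2, 9200, 0), (6, 1, 7600, 0)))  # False: keep the newer file
print(should_replace((6, 2, 9200, 0), (6, 2, 9200, 0)))  # False: same version, skip
```

Note that this rule is only safe under assumption (2): if a higher version number ever shipped with a compatibility break, “newest wins” silently breaks every program that depended on the older behavior.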
Now, that first rule is typically enforced by installers, though not always. But that second rule is harder to enforce because it relies on the developers who created the shared DLLs to understand the backward compatibility constraints that they operate under. If a newer version of the DLL is not compatible with the old one, then any programs that used the old version will break once a program is installed that replaces the shared DLL with a newer version.
And from experience, we know that even the most harmless-looking change carries a risk that somebody was relying on the old behavior, perhaps entirely inadvertently, such as assuming that a function consumes only a specific amount of stack space and in particular leaves certain stack memory unmodified. This means that the simple act of adding a new local variable to your function is potentially a breaking change.
Nowadays, programs avoid this problem by trying to be more self-contained with few shared DLLs, and by using packaging systems like MSIX to allow unrelated programs to share a common installation of popular DLLs, while still avoiding the “unwanted version upgrade” problem.
The post A question about the maximum number of values in a registry key raises questions about the question appeared first on The Old New Thing.
Google is planning big changes for Android in 2026 aimed at combating malware across the entire device ecosystem. Starting in September, Google will begin restricting application sideloading with its developer verification program, but not everyone is on board. Android Ecosystem President Sameer Samat tells Ars that the company has been listening to feedback, and the result is the newly unveiled advanced flow, which will allow power users to skip app verification.
With its new limits on sideloading, Android phones will only install apps that come from verified developers. To verify, devs releasing apps outside of Google Play will have to provide identification, upload a copy of their signing keys, and pay a $25 fee. It all seems rather onerous for people who just want to make apps without Google's intervention.
Apps that come from unverified developers won't be installable on Android phones—unless you use the new advanced flow, which will be buried in the developer settings.
When sideloading apps today, Android phones alert the user to the "unknown sources" toggle in the settings, and there's a flow to help you turn it on. The verification bypass is different and will not be surfaced to users. You have to know where it is and proactively turn it on yourself, and it's not a quick process.
The actual legwork to activate this feature only takes a few seconds, but the 24-hour countdown makes it something you cannot do spur of the moment. But why 24 hours? According to Samat, this is designed to combat the rising use of high-pressure social engineering attacks, in which the scammer convinces the victim they have to install an app immediately to avoid severe consequences.
You'll have to wait 24 hours to bypass verification. (Credit: Google)
"In that 24-hour period, we think it becomes much harder for attackers to persist their attack," said Samat. "In that time, you can probably find out that your loved one isn't really being held in jail or that your bank account isn't really under attack."
But people who are sure they don't want Google's verification system to get in the way of sideloading any old APK they come across don't have to wait until they encounter an unverified app to get started. You only have to select the "indefinitely" option once per phone, and you can turn developer options off again afterward.
According to Samat, Google feels a responsibility to Android users worldwide, and with more than 3 billion active devices out there, things are different than they used to be.
"For a lot of people in the world, their phone is their only computer, and it stores some of their most private information," Samat said. "Over the years, we've evolved the platform to keep it open while also keeping it safe. And I want to emphasize, if the platform isn't safe, people aren't going to use it, and that's a lose-lose situation for everyone, including developers."
But what does that safety look like? Google swears it's not interested in the content of apps, and it won't be checking proactively when developers register. This is only about identity verification—you should know when you're installing an app that it's not an imposter and does not come from known purveyors of malware. If a verified developer distributes malware, they're unlikely to remain verified. And what is malware? For Samat, malware in the context of developer verification is an application package that "causes harm to the user's device or personal data that the user did not intend."
So a rootkit can be malware, but a rootkit you downloaded intentionally because you want root access on your phone is not malware, from Samat's perspective. Likewise, an alternative YouTube client that bypasses Google's ads and feature limits isn't causing the kind of harm that would lead to issues with verification. But these are just broad strokes; Google has not commented on any specific apps.
Google says sideloading isn't going away, but it is changing. (Credit: Google)
Google is proceeding cautiously with the verification rollout, and some details are still spotty. Privacy advocates have expressed concern that verification will create a database that puts independent developers at risk of legal action. Samat says that Google does push back on judicial orders for user data when they are improper. The company further suggests it's not intending to create a permanent list of developer identities that would be vulnerable to legal demands. We've asked for more detail on what data Google retains from the verification process and for what length of time.
There is also concern that developers living in sanctioned nations might be unable to verify due to the required fee. Google notes that the verification process may vary across countries and was not created specifically to bar developers in places like Cuba or Iran. We've asked for details on how Google will handle these edge cases and will update if we learn more.
Android users in most of the world don't have to worry about developer verification yet, but that day is coming. In September, verification enforcement will begin in Brazil, Singapore, Indonesia, and Thailand. Impersonation and guided scams are more common in these regions, so Google is starting there before expanding verification globally next year. Google has stressed that the advanced flow will be available before the initial rollout in September.
Google stands by its assertion that users are 50 times more likely to get malware outside Google Play than in it. A big part of the gap, Samat says, is Google's decision in 2023 to begin verifying developer identities in the Play Store. This provided a framework for universal developer verification. While there are certainly reasons Google might like the control verification gives it, the Android team has felt real pressure from regulators in areas with malware issues to address platform security.
"In a lot of countries, there is chatter about if this isn't safer, then there may need to be regulatory action to lock down more of this stuff," Samat told Ars Technica. "I don't think that it's well understood that this is a real security concern in a number of countries."
Google has already started delivering the verifier to devices around the world—it's integrated with Android 16.1, which launched late in 2025. Eventually, the verifier and advanced flow will be on all currently supported Android devices. However, the UI will be consistent, with Google providing all the components and scare screens. So what you see here should be similar to what appears on your phone in a few months, regardless of who made it.
PCAST, the President’s Council of Advisors on Science and Technology, is generally not a high-profile group. It tends to be noticed when things go wrong, such as when the PCAST head named by Biden had to resign due to abusive behavior. Biden, who was generally supportive of science, didn't even name the members of PCAST until eight months after his inauguration. So it's no surprise that an administration that's been hostile to science took even longer to staff its version of the group.
The list of appointees was finally released on Wednesday, and it's notable for its almost complete absence of scientists. There are still nine unfilled vacancies on the council, so it's possible more scientists will be named later. But for now, PCAST is heavily tilted toward extremely wealthy technology figures.
These include investor Marc Andreessen, Google's Sergey Brin, Michael Dell of Dell, Larry Ellison of Oracle, Jensen Huang of NVIDIA, Lisa Su of AMD, and Mark Zuckerberg of Meta. But many of the lesser-known names have similar backgrounds. The previously named chairs of PCAST are investor David Sacks and Michael Kratsios, a former investment-company executive and current head of the Office of Science and Technology Policy. Of the new appointees, Safra Catz also comes from Oracle, Fred Ehrsam co-founded Coinbase, and David Friedberg is another investor.
A few of the new members actually have some background in academic research. Both Jacob DeWitte and Bob Mumgaard got PhDs from MIT before founding nuclear companies: DeWitte is the CEO of the small modular nuclear startup Oklo, and Mumgaard is the CEO of Commonwealth Fusion Systems. Su has a PhD as well, although she's been in executive positions for many years. John Martinis is a Nobel Prize winner for his work on quantum physics; he played a critical role in the development of Google's quantum computing efforts and has since been involved in two additional quantum computing startups.
This is not the council you'd name if you were at all interested in the role of fundamental research in enabling technology development. It's more appropriate if your focus is on investing in well-proven commercial technologies. In keeping with that, the announcement says, "Under President Trump, PCAST will focus on topics related to the opportunities and challenges that emerging technologies present to the American workforce, and ensuring all Americans thrive in the Golden Age of Innovation."
While PCAST isn't a high-profile group, it can play a useful role in analyzing emerging science and technology that doesn't neatly fall within the remit of any single agency. You can get a sense of that by looking at the reports it prepared during the Obama administration, which addressed fundamental issues like antibiotic resistance and applied work like advanced manufacturing.
While this council appears to be poorly prepared to understand the needs and function of fundamental academic research, it's pretty clear that none of that is a priority for this administration, and naming academics to this group is unlikely to change that trajectory. So while there's still a chance that researchers could be named in the future, there may not be any useful role for them.